TECHnalysis Research Blog

May 23, 2023
Dell and Nvidia Partner to Create Generative AI Solutions for Businesses

By Bob O'Donnell

As recent announcements demonstrate, the world is awash with new offerings targeted at bringing generative AI capabilities to businesses. From IBM to Google, Salesforce, Microsoft, Amazon and Meta, every tech company, it seems, is trying to take advantage of the excitement around this transformational new technology.

And with good reason, because it’s also become increasingly clear that most organizations are eager to embrace it. Businesses are quickly identifying the potential productivity enhancements, efficiencies, and other benefits it can enable. One big problem, however, is that most companies aren’t entirely sure how they can start to leverage generative AI. People with deep knowledge of how the technology works and how it can be implemented are still very few and far between (not to mention, very expensive).

Recognizing this disconnect, Dell Technologies and Nvidia have put together an offering called Project Helix that’s specifically designed to make the process of getting started with generative AI much easier. Project Helix is focused on creating full-stack, on-premises generative AI solutions that let companies either build new generative AI foundation models or customize existing ones using their own data.

One of the problems that quickly popped up in businesses that had started to use generative AI services is the leakage of internal IP. In fact, several companies—including Samsung and, most recently, Apple—have implemented policies that prevent their employees from using things like ChatGPT for work purposes because of fears related to this issue.

Part of the reason for this concern is that virtually all the early instances of generative AI could only run in huge cloud-based datacenters, and many of them collected the data entered into their prompts. In the blindingly fast evolution of the foundation models that underlie generative AI applications, however, a number of these concerns have already been addressed. One of the biggest changes is that there is now a huge range of open-source models available from marketplaces like Hugging Face, many of which can run very efficiently with more reasonable computing requirements, such as those of an appropriately equipped on-prem datacenter. On top of that, some of the big tech companies have started to shift the rules about where their models can be run, and they’re creating smaller versions of their models that are optimized to run on site.
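
To make that last point concrete, here is a minimal, hypothetical sketch (not part of Project Helix itself) of pulling a compact open-source model from Hugging Face and running it on a single locally hosted GPU with the transformers library. The model name and prompt are illustrative assumptions only.

# A hedged sketch: load a small open-source LLM locally instead of calling a cloud service.
# Requires the transformers, torch, and accelerate packages; the model choice is an example only.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "databricks/dolly-v2-3b"  # assumption: any similarly compact open model would do

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps memory requirements modest
    device_map="auto",          # place the model on whatever local GPU(s) are available
)

prompt = "Summarize the main themes in last quarter's customer support tickets."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because nothing in this flow leaves the building, prompts and outputs stay on hardware the organization controls, which is exactly the property on-prem deployments are after.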

Additionally, we’ve seen several companies, including Nvidia, start to offer models that are specifically designed for enterprise applications. The Nvidia development is interesting on several levels. First, of course, is the fact that while the company is certainly strongly associated with generative AI, that association has been almost exclusively because of its hardware. The company’s GPU chips power a large majority of the current generative AI applications and services in the cloud. At its GTC conference in March, however, the company surprised nearly everyone by unveiling an entire range of generative AI-related software. In particular, it introduced industry-specific foundation models and enterprise-focused development tools, including its NeMo large language model (LLM) framework and NeMo Guardrails for filtering out unwanted topics. One thing that wasn’t surprising is that these models were optimized to run on Nvidia hardware.
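
As a rough illustration of the topic-filtering idea behind NeMo Guardrails, the sketch below wires a simple “don’t discuss competitors” rule around a chat model using the open-source nemoguardrails package. The rule content, engine, and model settings are assumptions for illustration; they are not details of Nvidia’s enterprise offerings or the Project Helix bundle.

# A hedged sketch of topic filtering with the open-source nemoguardrails package.
# The backing engine/model here is a placeholder; an on-prem model could be configured instead.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

colang_content = """
define user ask about competitors
  "What do you think of our competitors' products?"
  "How do we compare to rival vendors?"

define bot refuse competitor talk
  "I'm sorry, I can only discuss our own products and services."

define flow competitor filter
  user ask about competitors
  bot refuse competitor talk
"""

config = RailsConfig.from_content(yaml_content=yaml_content, colang_content=colang_content)
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "How do we stack up against rivals?"}])
print(response["content"])  # the guardrail should route this question to the canned refusal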

With Project Helix, what Dell Technologies and Nvidia have done is put together a range of Dell PowerEdge server systems that include Nvidia H100 GPUs and Nvidia’s line of BlueField DPUs (data processing units, used for the high-speed interconnects between servers that AI workloads demand) and bundled them with Nvidia’s AI Enterprise software. In addition, Dell offers several different storage options from its PowerScale and ECS Enterprise Object Storage lines that are optimized for these types of AI workloads. The result is a full “solution” that lets companies get started with building or customizing generative AI models. Potential customers can either use one of the Nvidia foundation model options or, if they prefer, select an open-source model from Hugging Face (or a solution from another tech provider) and start the process.

The bundled Nvidia software includes the ability to import an organization’s existing corpus of data, ranging from documents and customer service chats to social media posts and much more, and then use that data to either train a brand-new model or customize an existing one. Once the training process is complete, the tools necessary to run inference and create new applications that leverage the newly trained model are included as well. The Dell bundle also includes a blueprint for helping companies walk through the process of creating/customizing these models and building these tools, as well as a range of technical support services.
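
To give a sense of what that “customize an existing model on your own data” step involves, here is a hedged sketch that uses the open-source Hugging Face stack as a stand-in for the bundled Nvidia tooling. The actual NeMo-based workflow differs, and the model name, file path, and hyperparameters below are illustrative assumptions.

# A hedged sketch: continued training of a small open-source causal LM on an internal text corpus.
# Requires the transformers and datasets packages; paths and settings are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "EleutherAI/pythia-1.4b"   # assumption: any small open causal LM works for the sketch
corpus_path = "internal_docs.txt"     # hypothetical plain-text export of company documents

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize the corpus into (at most) 512-token chunks, dropping empty lines.
dataset = load_dataset("text", data_files=corpus_path)["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
).filter(lambda row: len(row["input_ids"]) > 0)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="customized-model",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        fp16=True,                    # assumes a local GPU is available
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                         # adapt the base model to the internal corpus
trainer.save_model("customized-model")  # the customized model is then served for inference

In practice, parameter-efficient techniques such as LoRA are often layered on top of an approach like this to cut the compute and memory needed for customization even further.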

Best of all, because this work is being done internally, Project Helix can help manage the IP leakage issues that many companies—even the ones that have started working with the generative AI tools—are concerned about.

Another important benefit of Project Helix is that it lets companies start to leverage generative AI in more distinctive and personalized ways. While the general-purpose tools currently available can definitely help for certain types of applications and environments, most companies recognize that the real competitive benefit of generative AI lies in customization. As a result, there’s a great deal of interest in incorporating a company’s own data into these tools, but again, there’s a lot of confusion about how exactly to do that.

Putting together an “easy kit” for generative AI doesn’t mean many organizations won’t face challenges in leveraging their data and the technology to create the solutions they need. Let’s not forget that the concepts behind generative AI are still very new, and it’s an extremely complex technology. Nevertheless, by bundling the necessary hardware and software that’s been pretested to work together, along with guidance on how to work through the process, Project Helix looks to be an attractive option for organizations that are eager, or feel competitively compelled, to dive into this exciting new world.

Here’s a link to the original article: https://seekingalpha.com/article/4606689-dell-nvidia-partner-generative-ai-solutions-businesses

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and the professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.